A Human-Centered Approach to One-Shot Gesture Learning
Abstract
This article discusses the problem of one-shot gesture recognition using a human-centered approach and its potential application to fields such as human–robot interaction, where the user's intentions are indicated through spontaneous gesturing (one shot). Casual users have limited time to learn a gesture interface, which makes one-shot recognition an attractive alternative to interface customization. For natural interaction with machines, a framework must be developed that includes the human ability to understand gestures from a single observation. Previous approaches to one-shot gesture recognition have relied heavily on statistical and data-mining-based solutions and have ignored the mechanisms that humans use to perceive and execute gestures, which can provide valuable context information. This omission has led to suboptimal solutions. The focus of this study is on the process that leads to the realization of a gesture, rather than on the gesture itself. In this case, context involves the way in which humans produce gestures: their kinematic and anthropometric characteristics. In the method presented here, the strategy is to generate a data set of realistic samples based on features extracted from a single gesture sample. These features, called the "gist of a gesture," are considered to represent what humans remember when seeing a gesture and, later, the cognitive process involved when trying to replicate it. By adding meaningful variability to these features, a large training data set is created while preserving the fundamental structure of the original gesture. The availability of a large data set of realistic samples makes it possible to train classifiers for future recognition. The performance of the method is evaluated using different lexicons, and its efficiency is compared with that of traditional N-shot learning approaches.
The strength of the approach is further illustrated through human and machine recognition of gestures performed by a dual-arm robotic platform.
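The core idea of the abstract (synthesizing a training set from one sample by perturbing its kinematic and anthropometric features) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the function name, the perturbation model (a global scale factor plus additive noise), and all parameter values are assumptions chosen for the sketch.

```python
import numpy as np

def synthesize_samples(trajectory, n_samples=100, scale_sd=0.05, noise_sd=0.01, seed=None):
    """Generate synthetic gesture samples from a single recorded trajectory.

    trajectory: (T, D) array of joint positions over time.
    Returns an (n_samples, T, D) array of perturbed copies.
    Hypothetical sketch: the real method extracts the "gist of a gesture"
    and varies richer kinematic features than this simple model.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        # Anthropometric variability: small global scaling,
        # mimicking differences in limb length between users.
        scale = 1.0 + rng.normal(0.0, scale_sd)
        # Kinematic variability: small additive jitter on the trajectory,
        # mimicking execution noise while keeping the gesture's structure.
        jitter = rng.normal(0.0, noise_sd, trajectory.shape)
        samples.append(scale * trajectory + jitter)
    return np.stack(samples)
```

The resulting array can then feed any standard classifier, which is what turns a one-shot problem into an ordinary supervised-learning one.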
Similar Resources
Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning
Human communication is highly multimodal, including speech, gesture, gaze, facial expressions, and body language. Robots serving as human teammates must act on such multimodal communicative inputs from humans, even when the message may not be clear from any single modality. In this paper, we explore a method for achieving increased understanding of complex, situated communications by leveraging...
One-Shot-Learning Gesture Recognition Using HOG-HOF Features (Bachelor Thesis)
The purpose of this thesis is to describe one-shot-learning gesture recognition systems developed on the ChaLearn Gesture Dataset [3]. We use RGB and depth images and combine appearance (Histograms of Oriented Gradients) and motion descriptors (Histogram of Optical Flow) for parallel temporal segmentation and recognition. The Quadratic-Chi distance family is used to measure differences between ...
Domain-Adaptive Discriminative One-Shot Learning of Gestures
The objective of this paper is to recognize gestures in videos – both localizing the gesture and classifying it into one of multiple classes. We show that the performance of a gesture classifier learnt from a single (strongly supervised) training example can be boosted significantly using a ‘reservoir’ of weakly supervised gesture examples (and that the performance exceeds learning from the one...
First International Workshop on Adaptive Shot Learning for Gesture Understanding and Production (ASL4GUP 2017), Held in Conjunction with IEEE FG 2017, May 30, 2017, Washington, DC, USA
As human-robot collaboration methodologies develop, robots need to adopt fast learning methods in domestic scenarios. The paper presents a novel approach to learning associations between human hand gestures and the robot's manipulation actions. The role of the robot is to operate as an assistant to the user. In this context we propose a supervised learning framework to explore the gesture-actio...
Principal Motion Components for Gesture Recognition Using a Single Example (This work was partially supported by ChaLearn: The Challenges in Machine Learning Organization, http://www.chalearn.org/, whose directors are gratefully acknowledged.)
This paper introduces principal motion components (PMC), a new method for one-shot gesture recognition. In the considered scenario a single training video is available for each gesture to be recognized, which limits the application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion energy is obtained for each pair of consecutive frames in a video. Motion maps associated with a vide...
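The per-frame-pair motion maps described above can be approximated with a simple frame-differencing sketch. This is a simplified stand-in for illustration only; the PMC paper's actual motion-energy computation and the subsequent principal-component step are not reproduced here, and the function name is an assumption.

```python
import numpy as np

def motion_energy_maps(frames):
    """frames: (N, H, W) grayscale video as a NumPy array.

    Returns (N-1, H, W): one absolute frame-difference map per pair of
    consecutive frames, a crude proxy for PMC's 2D motion-energy maps.
    """
    frames = np.asarray(frames, dtype=float)
    # Difference along the time axis gives one map per consecutive pair;
    # taking the absolute value keeps only the magnitude of change.
    return np.abs(np.diff(frames, axis=0))
```

In the full method, these maps are flattened and summarized per video (e.g., via principal components) before matching against the single training example.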
Journal:
- Front. Robotics and AI
Volume 2017, Issue -
Pages -
Publication date: 2017